Abstract
This study introduces an emotion-aware music recommendation system that adapts song suggestions in real time based on a user’s facial expressions. The proposed framework integrates deep learning techniques for emotion recognition with machine learning algorithms for personalized music selection. Facial emotions are identified using a lightweight MobileNetV2-based convolutional neural network (CNN), ensuring fast and accurate detection. The recommendation process employs content-based filtering to align users’ emotional states with suitable music tracks. To maintain user privacy, the system implements secure data handling techniques, including encryption and authentication. Overall, the platform delivers responsive, mood-driven music recommendations with high accuracy and minimal latency, making it suitable for entertainment, emotional therapy, and adaptive music applications.
Introduction
This study presents a real-time, emotion-aware music recommendation system that detects a user’s mood through facial expressions and suggests songs aligned with their current emotion. The system integrates a MobileNetV2-based CNN for efficient facial emotion recognition, content-based filtering for mapping emotions to songs, and robust cybersecurity measures (AES-256 encryption, authentication, and access control) to protect user data.
The methodology involves three main components: real-time emotion detection via webcam, emotion-to-song mapping using acoustic and metadata features (tempo, rhythm, valence), and secure GUI/API integration for responsive interaction. The system uses transfer learning on the FER2013 dataset, with data augmentation to improve robustness, and deploys on a scalable cloud infrastructure with GPU acceleration.
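The emotion-to-song mapping step can be illustrated with a minimal content-based filtering sketch. The feature names (tempo, valence, energy) follow the acoustic features named above, but the track names, emotion profiles, and numeric values below are purely hypothetical placeholders, not data from this study:

```python
import math

# Hypothetical per-track acoustic feature vectors: (tempo_norm, valence, energy).
# Track names and values are illustrative only.
SONG_FEATURES = {
    "upbeat_pop":      (0.90, 0.90, 0.80),
    "slow_ballad":     (0.20, 0.20, 0.30),
    "calm_acoustic":   (0.40, 0.60, 0.30),
    "aggressive_rock": (0.80, 0.30, 0.95),
}

# Assumed target feature profile for each detected emotion.
EMOTION_PROFILES = {
    "happy":   (0.85, 0.90, 0.75),
    "sad":     (0.25, 0.20, 0.30),
    "angry":   (0.80, 0.30, 0.90),
    "neutral": (0.50, 0.50, 0.50),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recommend(emotion, k=2):
    """Rank tracks by similarity to the emotion's target profile, return top k."""
    profile = EMOTION_PROFILES[emotion]
    ranked = sorted(
        SONG_FEATURES,
        key=lambda s: cosine_similarity(SONG_FEATURES[s], profile),
        reverse=True,
    )
    return ranked[:k]

print(recommend("happy"))  # e.g. ['upbeat_pop', 'calm_acoustic']
```

A production system would replace the hard-coded dictionaries with features extracted from an audio catalog, but the ranking logic is the essence of content-based filtering.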
A comparative study with VGG16 and ResNet50 confirmed that MobileNetV2 achieves the best balance of accuracy (88.6%) and low latency (162 ms), making it highly suitable for real-time, low-power applications. The system demonstrates high personalization, adaptive playlist generation, and smooth user interaction, highlighting the effectiveness of combining lightweight CNNs with secure, emotion-aware music recommendation frameworks.
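Latency figures such as the 162 ms reported above are typically obtained by timing repeated single-frame inferences and taking a robust statistic. A minimal stdlib-only benchmarking harness might look like the following; `stub_predict` is a placeholder standing in for a real MobileNetV2 forward pass on a preprocessed face crop:

```python
import statistics
import time

def measure_latency(predict, frame, warmup=10, runs=100):
    """Median wall-clock latency of a single-frame prediction, in milliseconds."""
    for _ in range(warmup):  # warm-up iterations excluded from timing
        predict(frame)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        predict(frame)
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

# Hypothetical stand-in for a model forward pass; a real benchmark would call
# the loaded CNN on a 224x224 face crop and report per-model medians.
def stub_predict(frame):
    return sum(frame) % 7  # pretend 7-class emotion output

latency_ms = measure_latency(stub_predict, list(range(1000)))
print(f"median latency: {latency_ms:.3f} ms")
```

Using the median rather than the mean keeps occasional scheduler stalls from skewing the comparison between models.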
Conclusion
This work presents a comprehensive emotion-aware music recommendation framework that seamlessly integrates facial expression recognition with intelligent music selection. The system leverages the efficiency of the MobileNetV2 architecture to identify user emotions such as happiness, sadness, anger, fear, disgust, surprise, and neutrality in real time with high precision and minimal computational overhead. Owing to its lightweight structure, MobileNetV2 enables smooth performance even on devices with constrained hardware, allowing for broad applicability across both web-based and mobile environments. The real-time webcam-based emotion detection module enables dynamic interaction, creating an adaptive music experience that responds instantly to user mood variations.
The music recommendation engine employs a content-based filtering strategy that aligns detected emotions with corresponding song characteristics, including tempo, rhythm, and lyrical sentiment. This design ensures personalized and contextually relevant song suggestions that enhance user engagement and satisfaction. Security and privacy are prioritized through AES-256 encryption, authentication protocols, and access control mechanisms, protecting user data during transmission and interaction. A user-friendly graphical interface developed with Flask facilitates real-time detection, immediate playlist updates, and collection of user feedback for continuous model refinement.
The outcomes of this research confirm that the proposed MobileNetV2-based approach achieves superior recognition accuracy and faster inference compared to conventional models such as VGG16 and ResNet50. Beyond improved performance, the system demonstrates the practical fusion of deep learning, computer vision, and cybersecurity principles into a cohesive real-time solution.
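The authentication side of the security layer can be sketched with the Python standard library. Note this illustrates HMAC-SHA256 token signing only, not the AES-256 payload encryption described above (which requires a third-party cryptography library); the key, user names, and token format here are hypothetical:

```python
import hashlib
import hmac
import json
import time

# Illustrative server-side secret; in practice this would live in a secrets
# manager, and transport would additionally be protected by AES-256/TLS.
SECRET_KEY = b"demo-secret-key-not-for-production"

def issue_token(user_id: str, issued_at: float) -> str:
    """Sign a session payload with HMAC-SHA256 so it cannot be tampered with."""
    payload = json.dumps({"user": user_id, "iat": issued_at}, sort_keys=True)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token("alice", time.time())
print(verify_token(token))                           # True: untampered token
print(verify_token(token.replace("alice", "mallory")))  # False: payload altered
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` check would leak timing information an attacker could exploit to forge signatures byte by byte.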
In the future, the framework can be expanded to include multimodal emotion sensing by integrating speech tone, physiological signals, or textual sentiment for a more holistic understanding of user affect. Further enhancements may involve reinforcement learning for adaptive playlist optimization and large-scale cloud deployment with GPU acceleration to support concurrent users efficiently. Overall, the study illustrates that emotion-aware recommendation systems have significant potential in applications such as personalized media streaming, mental health therapy, and intelligent in-car entertainment, marking a step forward in human-centered AI-driven personalization.